In [1]:
! jupyter nbconvert --execute --to html "II.1. Single Factor Models.ipynb" --HTMLExporter.theme=dark
[NbConvertApp] WARNING | pattern 'II.1. Single Factor Models.ipynb' matched no files

Single Factor Models¶

In [2]:
import yfinance as yf
from scipy import stats
import pandas as pd
import numpy as np
import plotly.express as px
from plotly.subplots import make_subplots
import plotly.graph_objects as go
import plotly.io as pio
In [3]:
######## FIX RELATIVE PATH
import sys
sys.path.append("../../")
import plotly_template
pio.templates.default = 'simple_white+blog_mra'
In [4]:
def to_jupyter_latex(df):
    latex = df.head().to_latex(index=True, float_format="{:.2f}".format)
    latex = (latex.replace('\\toprule', '\\hline')
                  .replace('\\midrule', '\\hline')
                  .replace('\\bottomrule', '\\hline')
                  .replace('tabular', 'array'))
    print(latex)

Single Index Models¶

The CAPM equation suggests that the expected return on an asset is equal to the risk-free rate plus a risk premium. The risk premium is determined by the asset's beta, which measures its sensitivity to systematic risk. Systematic risk refers to the risk that cannot be eliminated through diversification and is associated with the overall market movements.

\begin{equation} E(R_{i}) = R_{f} + \beta_{i}(E(R_{M}) - R_{f}) \end{equation}
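A quick numerical illustration of the CAPM equation (all inputs below are made-up assumptions, not estimates from the data):

```python
def capm_expected_return(rf, beta, exp_market):
    """Expected return under the CAPM: E(R_i) = R_f + beta_i * (E(R_M) - R_f)."""
    return rf + beta * (exp_market - rf)

# e.g. risk-free rate 2%, expected market return 8%, beta 1.2
print(capm_expected_return(0.02, 1.2, 0.08))  # ~0.092, i.e. 9.2%
```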

Alpha and Beta Estimation¶

  • Asset managers: weekly/monthly observations and a constant-parameter assumption
  • Risk managers: daily observations and a time-varying-parameter assumption, e.g. EWMA, GARCH
\begin{equation} \hat{\beta}_{i} = \frac {\sum_{t=1}^{T} (X_{t} - \bar{X}) (R_{it} - \bar{R}_{i})} {\sum_{t=1}^{T} (X_{t} - \bar{X})^2} \end{equation}\begin{equation} \hat{\alpha}_{i} = \bar{R}_{i} - \hat{\beta}_{i} \bar{X} \end{equation}
\begin{equation} s_{i} = \sqrt{\frac{RSS_{i}}{T-2}} \label{eq:vector_ray} \end{equation}

Reference to equation $\eqref{eq:vector_ray}$

Example¶

In [5]:
df = pd.read_excel(r"data/Examples_II.1.xls", sheet_name="EX_II.1.1").set_index("Date")
df.index = df.index.date
df = df.loc[:, :"MSFT"]
ret = df.pct_change()[1:] #returns
$$ \begin{array}{lrrr} \hline & SPX & NWL & MSFT \\ \hline 2000-01-10 & 0.02 & -0.02 & 0.01 \\ 2000-01-18 & -0.02 & -0.00 & -0.08 \\ 2000-01-24 & -0.06 & -0.01 & -0.05 \\ 2000-01-31 & 0.05 & 0.01 & 0.08 \\ 2000-02-07 & -0.03 & 0.00 & -0.06 \\ \hline \end{array} $$
In [6]:
reg_msft = stats.linregress(ret.SPX, ret.MSFT) # regress MSFT on SPX
reg_nwl = stats.linregress(ret.SPX, ret.NWL) # regress NWL on SPX
In [7]:
reg_msft
Out[7]:
LinregressResult(slope=1.104210318384485, intercept=-0.0006618592810449706, rvalue=0.5754570214718748, pvalue=1.8014269223103655e-36, stderr=0.07885975545252356, intercept_stderr=0.0017892738852325724)
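As an aside, `stderr` in the result above is the standard error of the slope estimate, so a t-statistic for the beta is `slope / stderr` with $T-2$ degrees of freedom; this is how `linregress` derives its two-sided p-value. A minimal check on synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)                        # synthetic "market" returns
y = 0.5 * x + rng.normal(scale=0.5, size=200)   # synthetic asset returns

reg = stats.linregress(x, y)
t_stat = reg.slope / reg.stderr                      # t-statistic for H0: beta = 0
p_val = 2 * stats.t.sf(abs(t_stat), df=len(x) - 2)   # two-sided p-value
print(np.isclose(p_val, reg.pvalue))                 # matches linregress's p-value
```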
In [8]:
### application of the formulas above
x, y = ret["SPX"], ret["MSFT"]
b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean())**2).sum()
print(b)
a = y.mean() - b * x.mean()
print(a)
1.1042103183844845
-0.0006618592810449704
In [9]:
beta_msft = reg_msft.slope
alpha_msft = reg_msft.intercept
stderr_msft = reg_msft.stderr
annual_vol_msft = stderr_msft * np.sqrt(52)

beta_nwl = reg_nwl.slope
alpha_nwl = reg_nwl.intercept
stderr_nwl = reg_nwl.stderr
annual_vol_nwl = stderr_nwl * np.sqrt(52)
In [10]:
print("MSFT")
print("Alpha: "+str(round(alpha_msft, 4)))
print("Beta: "+str(round(beta_msft, 4)))
print("Std Err: "+str(round(stderr_msft, 4)))
print("Annual Vol: "+str(round(annual_vol_msft, 4)))

print("")
print("NWL")
print("Alpha: "+str(round(alpha_nwl, 4)))
print("Beta: "+str(round(beta_nwl, 4)))
print("Std Err: "+str(round(stderr_nwl, 4)))
print("Annual Vol: "+str(round(annual_vol_nwl, 4)))
MSFT
Alpha: -0.0007
Beta: 1.1042
Std Err: 0.0789
Annual Vol: 0.5687

NWL
Alpha: 0.0036
Beta: 0.506
Std Err: 0.071
Annual Vol: 0.5118

Assume we construct a portfolio that invests 30% in MSFT and 70% in NWL.

Return¶

At stocks level

$$ \hat{\alpha}_{PTF} = 0.3\,\hat{\alpha}_{MSFT} + 0.7\,\hat{\alpha}_{NWL} $$$$ \hat{\beta}_{PTF} = 0.3\,\hat{\beta}_{MSFT} + 0.7\,\hat{\beta}_{NWL} $$
In [11]:
print(alpha_msft*0.3 + alpha_nwl*0.7)
print(beta_msft*0.3 + beta_nwl*0.7)
0.0023084375088988653
0.6854319125642749

At portfolio level

$$ R_{PTF,t} = w_{MSFT} \cdot R_{MSFT, t} + w_{NWL} \cdot R_{NWL, t} $$
In [12]:
#define weights
w = np.full((len(ret), 2), [0.3, 0.7])
# portfolio returns
ret["PTF"] = ret[["MSFT", "NWL"]].mul(w).sum(axis=1)
$$\begin{array}{lrrrr} \hline & SPX & NWL & MSFT & PTF \\ \hline 2000-01-10 & 0.02 & -0.02 & 0.01 & -0.01 \\ 2000-01-18 & -0.02 & -0.00 & -0.08 & -0.03 \\ 2000-01-24 & -0.06 & -0.01 & -0.05 & -0.02 \\ 2000-01-31 & 0.05 & 0.01 & 0.08 & 0.04 \\ 2000-02-07 & -0.03 & 0.00 & -0.06 & -0.02 \\ \hline \end{array}$$
In [13]:
reg_ptf = stats.linregress(ret.SPX, ret.PTF)
alpha_ptf = reg_ptf.intercept
beta_ptf = reg_ptf.slope
print("Ptf alpha: " +str(reg_ptf.intercept))
print("Ptf beta: " +str(reg_ptf.slope))
Ptf alpha: 0.0023084375088988653
Ptf beta: 0.6854319125642752
In [14]:
linspace = np.arange(-0.1, 0.1,  0.01)
regline_msft = alpha_msft + beta_msft*linspace
regline_nwl = alpha_nwl + beta_nwl*linspace
regline_ptf = alpha_ptf + beta_ptf*linspace
In [15]:
fig = make_subplots(rows=1, cols=3, shared_yaxes='all', subplot_titles=("MSFT", "NWL", "PTF"))

fig.add_trace(
    go.Scatter(x=ret.SPX, y=ret.MSFT, mode="markers"),
    row=1, col=1)
fig.add_trace(go.Scatter(x=linspace, y=regline_msft), row=1, col=1)

fig.add_trace(
    go.Scatter(x=ret.SPX, y=ret.NWL,  mode="markers"),
    row=1, col=2)
fig.add_trace(go.Scatter(x=linspace, y=regline_nwl), row=1, col=2)

fig.add_trace(
    go.Scatter(x=ret.SPX, y=ret.PTF,  mode="markers"),
    row=1, col=3)
fig.add_trace(go.Scatter(x=linspace, y=regline_msft), row=1, col=3)
fig.add_trace(go.Scatter(x=linspace, y=regline_nwl), row=1, col=3)
fig.add_trace(go.Scatter(x=linspace, y=regline_ptf), row=1, col=3)

fig.update_layout(height=600, width=900, title_text="Regression over SPX")
fig.show()

Risk¶

In [16]:
df = pd.read_excel(r"data/Examples_II.1.xls", sheet_name="EX_II.1.2").set_index("Date")
df = df.loc[:, :"Cisco"]
df.index = df.index.date
ret = np.log(df/df.shift())[1:] #returns
$$\begin{array}{lrrr} \hline & SP100 & Amex & Cisco \\ \hline 2000-01-04 & -0.04 & -0.04 & -0.06 \\ 2000-01-05 & 0.00 & -0.02 & -0.00 \\ 2000-01-06 & 0.00 & 0.02 & -0.02 \\ 2000-01-07 & 0.03 & 0.01 & 0.06 \\ 2000-01-10 & 0.01 & 0.01 & 0.04 \\ \hline \end{array} $$
In [17]:
reg_amex = stats.linregress(ret.SP100, ret.Amex)
reg_cisco = stats.linregress(ret.SP100, ret.Cisco)
In [18]:
# standard error Amex
stderr_amex = (((ret.Amex - (reg_amex.intercept + reg_amex.slope*ret["SP100"]))**2).sum() / (len(ret)-2))**0.5
# standard error Cisco
stderr_cisco = (((ret.Cisco - (reg_cisco.intercept + reg_cisco.slope*ret["SP100"]))**2).sum() / (len(ret)-2))**0.5

At stock level (assuming uncorrelated returns)

$$ s_{PTF} = \sqrt{0.6^2 \cdot s_{AMEX}^2 + 0.4^2 \cdot s_{CISCO}^2} $$
In [19]:
print("s = " + str(round(((stderr_amex**2 * 0.6**2 + stderr_cisco**2 * 0.4**2)**0.5) * np.sqrt(250), 4)))
s = 0.1998

At portfolio level

In [20]:
w = np.full((len(ret), 2), [0.6, 0.4])
# portfolio returns
ret["PTF"] = ret[["Amex", "Cisco"]].mul(w).sum(axis=1)
reg_ptf = stats.linregress(ret.SP100, ret.PTF)

# standard error of the portfolio regression
stderr_ptf = (((ret.PTF - (reg_ptf.intercept + reg_ptf.slope*ret["SP100"]))**2).sum() / (len(ret)-2))**0.5
print("s= " +str(round(stderr_ptf*np.sqrt(250), 4)))
s= 0.1819

Note: For returns, applying the portfolio weights to each stock's alpha and beta yields the same result as running an OLS regression on the portfolio returns. For the specific risk of the portfolio, however, the two approaches differ!

Why: The specific risks (the regression residuals) are not, in fact, uncorrelated across stocks.

Solution: To compute specific risk correctly, first construct the constant-weighted portfolio and then apply the OLS regression. Alternatively, compute it from the covariance matrix of the regression residuals.
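The residual-covariance route can be sketched as follows (synthetic residuals and hypothetical weights, not the example data): the portfolio specific variance is $w^\top \Sigma_\varepsilon w$, which accounts for correlation between the residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic regression residuals for two stocks (correlated on purpose)
eps = rng.multivariate_normal([0, 0], [[0.0004, 0.0001], [0.0001, 0.0009]], size=500)

w = np.array([0.6, 0.4])                 # portfolio weights
cov_eps = np.cov(eps, rowvar=False)      # residual covariance matrix

spec_var = w @ cov_eps @ w               # w' Sigma_eps w
spec_vol_annual = np.sqrt(spec_var * 250)
print(round(spec_vol_annual, 4))
```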

Estimate Risk with EWMA¶

$$ \hat{\beta}_{t}^{\lambda} = \frac{Cov_{\lambda}(X_{t}, Y_{t})}{V_{\lambda}(X_{t})} $$$$ \text{Systematic Risk} = \hat{\beta}_{t}^{\lambda} \sqrt{V_{\lambda}(X_{t})} \cdot \sqrt{h} $$
In [21]:
w_amex=0.6
lam=0.95
In [22]:
sq_ret = ret**2
df_ewma_var = pd.DataFrame(columns=sq_ret.columns, index=sq_ret.index) 

ewma = np.array([0.0001, 0.0001, 0.0001,  0.0001])
for row, item in sq_ret.iterrows():
    ewma = (1-lam)*item + ewma*lam
    df_ewma_var.loc[row, :] = ewma.values
In [23]:
mul_ret = ret[["Amex", "Cisco", "PTF"]].mul(ret["SP100"], axis=0)
df_ewma_cov = pd.DataFrame(columns=["Amex", "Cisco", "PTF"], index=sq_ret.index) 

ewma = np.array([0.0001, 0.0001, 0.0001])
for row, item in mul_ret.iterrows():
    ewma = (1-lam)*item + ewma*lam
    df_ewma_cov.loc[row, :] = ewma.values
In [24]:
emwa_beta = df_ewma_cov.div(df_ewma_var["SP100"], axis=0).iloc[:-1]
emwa_relvol = (df_ewma_var[["Amex", "Cisco", "PTF"]].div(df_ewma_var["SP100"], axis=0))**0.5
emwa_corr = emwa_beta/emwa_relvol
sys_risk = emwa_beta["PTF"]* (df_ewma_var["SP100"]*250)**0.5
emwa_beta["Ols_beta"] = reg_ptf.slope
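The manual recursions above can equivalently be written with pandas' built-in `ewm`: with `alpha=1-lam` and `adjust=False`, `ewm(...).mean()` applied to $x_t^2$ follows $v_t = (1-\lambda)x_t^2 + \lambda v_{t-1}$, differing only in the seed value (pandas seeds with the first observation rather than 0.0001). A sketch on synthetic series:

```python
import numpy as np
import pandas as pd

lam = 0.95
rng = np.random.default_rng(2)
x = pd.Series(rng.normal(scale=0.01, size=300))                # synthetic market returns
y = pd.Series(0.8 * x + rng.normal(scale=0.01, size=300))      # synthetic asset returns

# EWMA variance and covariance via the recursion applied to x^2 and x*y
ewma_var = (x ** 2).ewm(alpha=1 - lam, adjust=False).mean()
ewma_cov = (x * y).ewm(alpha=1 - lam, adjust=False).mean()

ewma_beta = ewma_cov / ewma_var   # time-varying beta estimate
print(ewma_beta.iloc[-1])
```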
In [25]:
fig = make_subplots(specs=[[{"secondary_y": True}]])
fig.add_trace(go.Scatter(x=emwa_beta.index, y=emwa_beta.PTF,  name="EWMA Beta"), )
fig.add_trace(go.Scatter(x=sys_risk.index, y=sys_risk.values, name="Systematic Risk"), secondary_y=True)
fig.add_trace(go.Scatter(x=emwa_beta.index, y=emwa_beta.Ols_beta, line_color="black", name="OLS Beta"))
fig.update_layout(height=600, width=900, title_text="Portfolio EWMA vs OLS Beta")
fig.show()

Relationship between Beta, Correlation and Relative Volatility¶

$$\beta = \frac{Cov(X, Y)}{V(X)} = \varrho_{XY}\,\frac{\sigma_{Y}}{\sigma_{X}}$$
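Beta, correlation and relative volatility are tied together by $\beta = \varrho_{XY}\,\sigma_Y/\sigma_X$. A quick numerical check of this identity on synthetic data (the sample analogues agree exactly):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=1000)
y = 1.5 * x + rng.normal(size=1000)

beta = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
corr = np.corrcoef(x, y)[0, 1]
relvol = np.std(y, ddof=1) / np.std(x, ddof=1)

print(np.isclose(beta, corr * relvol))  # True: beta = correlation * relative volatility
```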
In [26]:
fig = make_subplots(specs=[[{"secondary_y": True}]])
fig.add_trace(go.Scatter(x=emwa_beta.index, y=emwa_beta.Amex, name="EWMA Beta Amex "), )
fig.add_trace(go.Scatter(x=emwa_relvol.index, y=emwa_relvol.Amex, name= "EWMA RelVol Amex"))
fig.add_trace(go.Scatter(x=emwa_corr.index, y=emwa_corr.Amex, name= "EWMA Corr Amex"), secondary_y=True)
fig.update_layout(yaxis = dict(range=[0, 5]), yaxis2 = dict(range=[0, 1]), title_text="Amex")
fig.show()
In [27]:
fig = make_subplots(specs=[[{"secondary_y": True}]])
fig.add_trace(go.Scatter(x=emwa_beta.index, y=emwa_beta.Cisco, name="EWMA Beta Cisco"))
fig.add_trace(go.Scatter(x=emwa_relvol.index, y=emwa_relvol.Cisco, name="EWMA RelVol Cisco"))
fig.add_trace(go.Scatter(x=emwa_corr.index, y=emwa_corr.Cisco, name="EWMA Corr Cisco"), secondary_y=True)
fig.update_layout(yaxis = dict(range=[0, 5]), yaxis2 = dict(range=[0, 1]),  title_text="Cisco")
fig.show()

Risk Decomposition¶

Taking the expectation and variance of (reference) and assuming $Cov(X, \epsilon) = 0$, we can see that: $$ E(Y) = \alpha + \beta E(X) $$ $$ V(Y) = \beta^2V(X) + V(\epsilon)$$ Hence the portfolio volatility can be decomposed into:

  • the sensitivity to the market factor beta
  • the volatility of the market factor
  • the specific risk $V(\epsilon)$

The (reference) can then be expressed as $$ \text{Total Risk} = \left(\text{Systematic Risk}^2 + \text{Specific Risk}^2\right)^{1/2} $$
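With illustrative figures (assumptions, not from the example data): beta 1.2, annualised market volatility 15%, specific volatility 10%:

```python
import numpy as np

# Illustrative annualised inputs (assumptions, not from the example data)
beta, market_vol, specific_vol = 1.2, 0.15, 0.10

systematic_risk = beta * market_vol                          # 1.2 * 0.15 = 0.18
total_risk = np.sqrt(systematic_risk**2 + specific_vol**2)   # volatilities add in quadrature
print(round(total_risk, 4))  # 0.2059
```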

Note:

  1. The equity beta alone ignores two of the three components of risk
  2. Risk (volatility) is not additive; only variance is

TODO:

  • Cross reference equation
  • Adjust relative path
  • Implement time varying Beta
  • Why the risk of the portfolio $s_{ptf} $ is similar in section two
  • Charts
  • utils.py (with PCA and other functions)